Video surveillance (CCTV) is a technology that is nowadays deeply woven into the everyday life of many people, who have come to expect it in a wide range of circumstances (Ossola, 2019). The rationale behind the installation of these systems seems very clear to governments. For example, on Buffalo’s (NY) open data website, one can read that “the City of Buffalo deploys a real-time, citywide video surveillance system to augment the public safety efforts of the Buffalo Police Department”. Yet the development of this technology is not free from controversy. For instance, many observers claim that the expansion of video surveillance poses an unregulated threat to privacy (ACLU, 2021). Still, many people seem willing to accept this loss of privacy because the surge in video surveillance makes them feel safer (Madden & Rainie, 2015).
Throughout this research, we challenge the widespread belief that people who have “nothing to hide” should be content with the expansion of CCTV networks because the latter makes them safer (Madden & Rainie, 2015). Indeed, on top of the many privacy issues linked with this surge in video surveillance systems, one might legitimately ask whether these cameras actually make people safer.
The goal of the first phase of this project is to investigate the crime deterrent potential of CCTVs in an American city. This potential will also be compared to the different types of crime committed in this area. In a second phase, the dispersion of CCTVs within the city will be investigated. Indeed, according to some research, mass surveillance has a stronger impact on communities already disadvantaged by their poverty, race, religion, ethnicity, or immigration status (Gellman & Adler-Bell, 2017). We would like to see whether our data enables us to validate or invalidate this theory. It would also be extremely interesting, even though challenging, to see whether the installation of surveillance systems could create even more pernicious issues such as crime displacement (Waples, Gill & Fisher, 2009).
In sum, we argue that, in a world where CCTVs and other surveillance systems are flourishing, it might be beneficial to take a step back and question both the efficacy and the implementation design of such technologies, since they are often portrayed by various stakeholders as miraculous solutions to very complex issues.
Augustin: Augustin obtained a degree in Business Administration at the University of St-Gallen where he had the opportunity to develop a strong interest in digital business ethics. He wrote his bachelor’s thesis on the privacy implications of the use of fear appeals in home surveillance devices’ marketing strategy.
Marine: Marine completed a bachelor’s degree in Law at the UBO (Université de Bretagne-Occidentale). She is currently enrolled in the Master DCS (Droit, Criminalité et Sécurité des technologies de l’information) at the University of Lausanne. Last year, she had the opportunity to take a data protection course and to learn more about cyber security and crime in general.
Daniel: Daniel is an exchange student from Koblenz, Germany. He obtained a bachelor’s degree in Business Administration/Management at the WHU - Otto Beisheim School of Management, Germany. He is currently pursuing a Master of Management with a focus on family businesses, entrepreneurship and data science. Of particular relevance to this project, Daniel spent several months in the United States after high school and can thus relate to the topic of police violence and crime in the US.
Firstly, from our respective backgrounds, we derive a strong interest in new technologies and privacy. We believe that every person is entitled to the fundamental right to privacy. Unfortunately, one observes an increasing tendency of governments and other stakeholders (e.g. businesses such as GAFA (Google, Amazon, Facebook, Apple)) to take more and more control over our daily lives through digital technologies such as cameras, computers or smartphones. For these reasons, it is interesting to ask ourselves whether this massive collection of our data leads to more security or to more restrictions of our freedom.
Secondly, under European law such as the GDPR, the collection and processing of our data must be proportionate to the purpose of that processing. It is therefore in our interest to determine whether similar principles apply in the United States and to see whether the installation of cameras, with security as the stated objective, really reduces crime and makes a city safer.
Thirdly, it must also be said that crime and the legislative debates around the right to carry a gun in the United States are fascinating. At first sight, it seems as if the freedom to carry a gun makes the US more prone to crimes such as mass shootings. To verify or falsify our hypotheses, we also want to see, through the datasets we obtained, what kinds of crime prevail in American cities and how they evolve according to the districts and their particularities.
We have six raw data sets, all retrieved from the Baltimore government’s open data portal. We found data about crimes committed in Baltimore, CCTV locations in the city, poverty rates, the population, and the households with internet access. We also found a data set showing the reference boundaries of the Community Statistical Area geographies. The latter will certainly be helpful to match the observations of the different data sets together.
This dataset represents the location and characteristics of major crimes against persons, such as homicide, shooting, robbery, aggravated assault etc., within the city of Baltimore. This data set contains 350’294 observations.
RowID = ID of the row, 350’294 in total
CrimeDateTime = date and time of the crime. Format yyyy/mm/dd hh:mm:sstzd
CrimeCode = Code corresponding to the type of crime committed
Location = Textual information on where the crime was committed
Description = Textual description of the crime committed corresponding to a CrimeCode.
Inside/Outside = Provides information on whether crime was committed inside or outside
Weapon = Provides details on what weapon has been used, if any
Post = Number corresponding to the Police Post concerned. A map with corresponding police posts can be found here: http://moit.baltimorecity.gov/sites/default/files/police_districts_w_posts.pdf?__cf_chl_captcha_tk__=pmd_NhnE710SS8QEWdKOyT5Ug6IJZGoF6iIntFYY30vctes-1634309136-0-gqNtZGzNAxCjcnBszQPl
District = Name of the district, regrouping different neighbourhoods. Baltimore is officially divided into nine geographical regions: North, Northeast, East, Southeast, South, Southwest, West, Northwest, and Central.
Neighborhood = Name of the neighborhood in which the crime was committed. Most names match the neighborhood names contained in the dataset about Community Statistical Areas.
Latitude = Latitude, Coordinate system: EPSG:4326 WGS 84
Longitude = Longitude, Coordinate system: EPSG:4326 WGS 84
GeoLocation = Combination of latitude and longitude, Coordinate system: EPSG:4326 WGS 84
Premise = Information on the premise where the crime was committed. More than 120’000 crimes were recorded in the streets.
crime_data <- read.csv(file = here::here("data/Baltimore_Part1_Crime_data.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/part1-crime-data/explore]
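As a side note, CrimeDateTime strings in the format described above can be parsed with base R; the sample value below is invented for illustration.

```r
# Hypothetical sample value following the yyyy/mm/dd hh:mm:ss pattern described above.
raw <- "2021/03/15 14:30:00"

# Parse with a base-R format mask; a time zone must be supplied explicitly.
parsed <- as.POSIXct(raw, format = "%Y/%m/%d %H:%M:%S", tz = "UTC")

format(parsed, "%Y-%m-%d")  # "2021-03-15"
```

The same mask can be handed to filter steps later on, e.g. to keep only post-2000 records.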
This dataset represents closed circuit camera locations capturing activity within 256ft (~2 blocks). It contains 837 observations in total.
X = Longitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator
Y = Latitude: Coordinate system: EPSG:3857 WGS 84 / Pseudo-Mercator
OBJECTID = ID of the camera, 837 in total
CAM_NUM = Unique number attributed to the camera. This might suggest that the data set does not show the location of every camera in Baltimore.
LOCATION = Textual information on where the camera is located
PROJ = Name of the area in which the camera is located. It does not always match the name of the “standard” community statistical areas.
XCOORD = Longitude, Coordinate system: EPSG:4326 WGS 84
YCOORD = Latitude, Coordinate system: EPSG:4326 WGS 84
cctv_data <- read.csv(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/cctv-locations-crime-cameras/explore]
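Since the file carries both Pseudo-Mercator (X/Y) and WGS 84 (XCOORD/YCOORD) columns, the sketch below shows how, under the spherical EPSG:3857 convention, one pair converts into the other. The function name and the coordinates used are our own illustrative choices, not taken from the data set.

```r
# Invert the Pseudo-Mercator (EPSG:3857) projection to recover lon/lat (EPSG:4326).
# R is the sphere radius used by EPSG:3857 (6378137 m).
merc_to_lonlat <- function(x, y, R = 6378137) {
  lon <- (x / R) * 180 / pi
  lat <- (2 * atan(exp(y / R)) - pi / 2) * 180 / pi
  c(lon = lon, lat = lat)
}

# Baltimore sits near lon -76.6, lat 39.3; a point projected from there
# should come back to roughly those values.
merc_to_lonlat(-8529000, 4762000)
```

This is only a sanity-check helper; the actual reprojection in the analysis is delegated to the CRS machinery of the sp package.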
This dataset provides information about the percent of family households living below the poverty line. This indicator measures the percentage of households whose income fell below the poverty threshold out of all households in an area.
Federal and state governments use such estimates to allocate funds to local communities. Local communities use these estimates to identify the number of individuals or families eligible for various programs. This information will be useful for us to study the dispersion of CCTVs within Baltimore in comparison to the poverty level in a given area. This dataset contains 55 observations, one percentage for each community statistical area. There seems to be only one NA. The most relevant variables are the following:
CSA2010 = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.
hhpov15 - hhpov19 = each of these five columns contains the percent of family households living below the poverty line for a given year, from 2015 to 2019.
Shape_Area - Shape_Length = standard fields giving the area and the perimeter of a polygon
poverty_data <- read.csv(file = here::here("data/Percent_of_Family_Households_Living_Below_the_Poverty_Line.csv"))
Source of the data set: [https://arcg.is/1qOrnH]
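Should the yearly hhpov columns ever need to be analysed over time, they can be reshaped from wide to long format. The sketch below uses base R's reshape on a toy table with invented values (only two years shown); the area names are placeholders.

```r
# Toy poverty table mimicking the wide layout described above (values invented).
pov <- data.frame(CSA2010 = c("Area A", "Area B"),
                  hhpov15 = c(25.1, 9.8),
                  hhpov16 = c(24.3, 10.2))

# Base-R reshape to long form: one row per (area, year) with the poverty rate.
pov_long <- reshape(pov, direction = "long",
                    varying = c("hhpov15", "hhpov16"),
                    v.names = "pct_below_poverty",
                    timevar = "year", times = c(2015, 2016),
                    idvar = "CSA2010")
pov_long
```

tidyr's pivot_longer would achieve the same in the tidyverse style used elsewhere in this project.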
This dataset provides information about the Community Statistical Area geographies for Baltimore City, based on aggregations of Census tract (2010) geographies. It will serve as a geographical point of reference for us to match the observations of the different datasets together. This dataset contains 55 observations, one for each area. The most relevant variables are the following:
community = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.
neigh = name of the neighbourhoods contained in the area.
tracts = census tract associated with each neighbourhood. An interactive map of neighborhood statistical areas with census tracts is available online (http://planning.baltimorecity.gov/sites/default/files/Neighborhood%20Statistical%20Areas%20with%20Census%20Tracts.pdf?__cf_chl_captcha_tk__=pmd_5qD.WnCEfWnEa5h1muEPfTVDhN2uheRFagwmglbtKxg-1634299783-0-gqNtZGzNAzujcnBszQO9).
area_data <- read_csv(file = here::here("data/Community_Statistical_Areas__CSAs___Reference_Boundaries.csv"))
Source of the data set: [https://data.baltimorecity.gov/datasets/community-statistical-area-1/explore?location=39.284605%2C-76.620550%2C12.26]
This data set provides information about the population in each Community Statistical Area. Information about the total population in 2010 and 2020 is provided. It will be useful to calculate per-capita values in each community. The most relevant variables are the following:
community = name of the community statistical area. The Baltimore Data Collaborative and the Baltimore City Department of Planning divided Baltimore into 55 CSAs. These 55 units combine Census Bureau geographies together in ways that match Baltimore’s understanding of community boundaries, and are used in social planning.
tpop20 = total population of each Community Statistical Area in 2020
population_data <- read.csv(file = here::here("data/Total_Population.csv"))
Source of the data set: [INSERT SOURCE HERE]
This data set gives information about the percentage of households with no internet in each of the 55 Community Statistical Areas. This information is provided for the years 2017, 2018 and 2019. It will be useful to detect whether there is a relationship between internet access and crimes or CCTV installations in neighborhoods. The most important variables are:
CSA2010 = name of the community statistical area.
nohhint17 = percentage of households in this particular neighborhood with no internet access in the year 2017.
nohhint18 = percentage of households in this particular neighborhood with no internet access in the year 2018.
nohhint19 = percentage of households in this particular neighborhood with no internet access in the year 2019.
Shape_Area = standard field giving the area of a polygon
Shape_Length = standard field giving the perimeter of a polygon
Percent_of_Households_with_No_Internet_at_Home <- read.csv(file = here::here("data/Percent_of_Households_with_No_Internet_at_Home.csv"))
Source of the data set: [INSERT SOURCE HERE]
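To give a flavour of the relationship check this data set allows, the sketch below correlates a no-internet percentage with a CCTV count per area. All numbers are invented placeholders, not values from the actual data.

```r
# Hypothetical per-area values, for illustration only (not from the real data).
no_internet_pct <- c(12, 30, 8, 25, 18)   # stand-in for nohhint19
cctv_count      <- c(5, 14, 2, 11, 9)     # stand-in for cameras per area

# Pearson correlation; a value near +1 would suggest that areas with less
# internet access tend to host more cameras.
cor(no_internet_pct, cctv_count)
```

A scatterplot of the real columns, rather than a single coefficient, would of course be the first thing to inspect in the actual analysis.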
Here, the main goal is to transform the area data set into a new data set that contains one observation per neighborhood. Indeed, it is important to distinguish neighborhoods, which are smaller areas, from communities, which are larger and often contain several neighborhoods. We achieve this by first creating a new data set with each neighborhood assigned to a community using separate_rows, and second by establishing a new column with lower case letters for a later merge. To do so, we combine the mutate function with tolower, which converts the uppercase letters of a string to lowercase.
area_data2 <- separate_rows(area_data, Neigh, sep = ", ") #Creation of a new data set with each neighborhood being assigned to an area
area_data2 <- mutate(area_data2,neigh=tolower(Neigh)) #Creation of a new column with lower case letters
As the neighborhood names in the crime data set are written in lower case letters, we again create a lower-case column to join the two data sets. We join the area data set and the crime data set using left_join. Next, we use the anti_join function to identify which observations did not match. The outcome shows all the neighborhoods that did not match. As shown below, the issues mostly come from spelling differences (e.g. Mount written Mt.). As very few observations do not match, we change the names manually.
crime_data <- mutate(crime_data,neigh=tolower(crime_data$Neighborhood)) #Creation of new column with lower case letters
crime_data_with_areas <- crime_data %>%
left_join(area_data2,by="neigh") #We create a new data sets that contains the name of the area in which the crime was committed
crime_data_NAs <- crime_data %>%
anti_join(area_data2,
by="neigh") #Here is the list of all the NAs we have
unique(crime_data_NAs$neigh) #We see that we have very few unassigned names, we can change this by hand.
crime_data["neigh"][crime_data["neigh"]=="mount washington"] <- "mt. washington"
crime_data["neigh"][crime_data["neigh"]=="carroll - camden industrial area"] <- "caroll-camden industrial area"
crime_data["neigh"][crime_data["neigh"]=="patterson park neighborhood"] <- "patterson park"
crime_data["neigh"][crime_data["neigh"]=="glenham-belhar"] <- "glenham-belford"
crime_data["neigh"][crime_data["neigh"]=="new southwest/mount clare"] <- "hollins market"
crime_data["neigh"][crime_data["neigh"]=="mount winans"] <- "mt. winans"
crime_data["neigh"][crime_data["neigh"]=="rosemont homeowners/tenants"] <- "rosemont"
crime_data["neigh"][crime_data["neigh"]=="broening manor"] <- "o'donnell heights"
crime_data["neigh"][crime_data["neigh"]=="boyd-booth"] <- "booth-boyd"
crime_data["neigh"][crime_data["neigh"]=="lower herring run park"] <- "herring run park"
crime_data["neigh"][crime_data["neigh"]=="mt pleasant park"] <- "mt. pleasant park"
We get rid of the 764 remaining observations which had no information about the neighborhood. This represents a very small portion of our total number of observations. Finally, we use the semi_join function to create the final data set, which is essentially the same as the original one minus the 764 observations.
Finally, we want to get rid of the observations dating from before 2000, as the Baltimore CCTV program started in the year 2000. We first check the structure of the data set using the str function. We notice that the CrimeDateTime column is not a date. We change that and finally keep the rows we want using filter.
crime_data_with_areas <- crime_data %>%
semi_join(area_data2,by="neigh") %>%
left_join(area_data2,by="neigh") #Here we have the final data frame with a community for each crime
str(crime_data_with_areas) # We see that the crime CrimeDateTime column is not a date. We thus convert it.
crime_data_with_areas$CrimeDateTime <- as.Date(crime_data_with_areas$CrimeDateTime)
crime_data_with_areas <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2000-01-01")) #We had 24 observations dating back to before the year 2000 and 24 observations with no date. We only keep crimes committed after 2000 as the CCTV program in Baltimore started in 2000.
The standard community statistical area system includes 56 areas, and one of these 56 statistical areas is the jail. For the poverty data, however, only 55 statistical areas are provided, since there is obviously no poverty data for the jail. To solve this inconsistency, we add a new row. Moreover, we need to fill a missing value for the year 2019: here we take the average of the past years.
poverty_data <- rbind(poverty_data,list(56,"Unassigned -- Jail",0,0,0,0,0,0,0))
poverty_data[48,7] <- c(poverty_data[48,3],poverty_data[48,4],poverty_data[48,5],poverty_data[48,6]) %>% mean() #The poverty rate of South Baltimore in 2019 was missing. This area's rate over the past years seems stable (it is always one of the wealthiest areas), which is why we compute the mean of the past 4 years to replace the missing value.
The CCTV data set seems rather tidy; we will mostly use the first two columns, which contain information about the location of each CCTV. Therefore, we still need to make sure there are no missing values in these two columns. We do so by combining the which and the is.na functions and by filtering for potential empty observations.
which(is.na(cctv_data$X))
#> integer(0)
which(is.na(cctv_data$Y))
#> integer(0)
filter(cctv_data, cctv_data$X=="")
#> [1] X Y OBJECTID
#> [4] CAM_NUM NOTES LOCATION
#> [7] PROJ XCOORD YCOORD
#> [10] created_user created_date last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
filter(cctv_data, cctv_data$Y=="")
#> [1] X Y OBJECTID
#> [4] CAM_NUM NOTES LOCATION
#> [7] PROJ XCOORD YCOORD
#> [10] created_user created_date last_edited_user
#> [13] last_edited_date
#> <0 rows> (or 0-length row.names)
#We are not sure this is the proper technique, but by doing so we ensure that we have neither NAs nor empty values, and thus that our data set is tidy.
At first sight, this household internet data set from Baltimore looks very tidy. Nevertheless, we quickly run some code to filter out missing values or detect anomalies.
sum(is.na(Percent_of_Households_with_No_Internet_at_Home))
#> [1] 0
Having examined the sum of NAs, we see that this data set is clean. Since there are only 55 rows, the jail was automatically excluded when this data set about internet access was compiled (which makes sense, since the jail probably has its own internet access but has no households to count).
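An alternative to running which(is.na(...)) and filter separately for each column is to count NAs and empty strings per column in one pass. The data frame below is a toy stand-in for cctv_data, with invented values.

```r
# Toy stand-in for cctv_data (invented values) with one NA and one empty string.
df <- data.frame(X = c(-8529000, NA, -8530000),
                 LOCATION = c("N Charles St", "", "E Baltimore St"),
                 stringsAsFactors = FALSE)

# Count, per column, how many entries are NA or empty in a single pass.
na_or_empty <- sapply(df, function(col) sum(is.na(col) | col == ""))
na_or_empty
```

If every entry of the resulting vector is 0, the columns of interest are complete and the dataset can be considered tidy in that respect.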
The original CCTV data set posed a slight challenge. Although it contained some neighborhood names, most of them did not match the “standard neighborhood” names. Therefore, to solve this, we resorted to geospatial counting.
Our procedure includes the following steps. After reading the table and converting the data into a data table, we define which columns will be the coordinates of the newly created spatial file. We have several types of coordinates here; we use X and Y, which follow the EPSG:3857 WGS 84 / Pseudo-Mercator coordinate system. Spatial files must have coordinate systems assigned to them. In the case at hand, we will work with the above-mentioned EPSG:3857 WGS 84 / Pseudo-Mercator coordinate system for all the spatial files we are going to use. Therefore, to ensure consistency, we create a CRS object called crs.geo1 that is going to be assigned to all the spatial files we will use. To assign a known CRS to spatial data, we use the proj4string function, to which we assign crs.geo1.
#read in data table
balt_dat <- fread(file = here::here("data/Baltimore_CCTV_Locations_Crime_Cameras.csv"))
#convert to data table
balt_dat <- as.data.table(balt_dat)
#make data spatial
coordinates(balt_dat) <- c("X","Y")
crs.geo1 <- CRS("+proj=merc +a=6378137 +b=6378137 +lat_ts=0 +lon_0=0 +x_0=0 +y_0=0 +k=1 +units=m +nadgrids=@null +wktext +no_defs +type=crs")
proj4string(balt_dat) <- crs.geo1
Then we plot to see the output (a cloud of points representing all the CCTVs).
plot(balt_dat, pch = 20, col = "steelblue") #We can use the plot function to quickly plot the SpatialPointsDataFrame we created. We see a cloud of points representing the CCTV locations in Baltimore.
Next, we have to work with the shapefile, which is another special type of file. Basically, it is a set of polygons representing different areas of the city of Baltimore. We downloaded this file on the Open Baltimore portal. We read it in and again assign crs.geo1 as its coordinate system. In this way, we ensure that our files share the same coordinate system.
#read in shapefile of baltimore
baltimore <- readOGR(dsn = here::here("data/Community_Statistical_Area"), layer = "Community_Statistical_Area") #name of file and object
proj4string(baltimore) <- crs.geo1
We can now plot these two spatial files together to see the spread of CCTVs over the 56 community statistical areas.
#plot
plot(baltimore,main="Spread of CCTVs in different communities of Baltimore")
plot(balt_dat,pch=20, col="steelblue" , add=TRUE) #Plotting these two layers together, we obtain a map of Baltimore with the 56 community statistical areas and the CCTVs on top.
To quantify these results, we need R to count how many CCTVs belong to each area. Here, the over function determines over which polygon each CCTV lies. Next, we create a new object called counts and turn it into a data frame (so that it is easier to work with). We use sum to verify that all 836 observations were counted. This is the case, so we are happy. Still, we notice that we only have 41 rows, meaning that only 41 out of 56 areas contain at least one CCTV.
#Perform the count
proj4string(balt_dat)
proj4string(baltimore) #To be able to perform the count, we must ensure that the two spatial files have a similar CRS. This is the case as we attributed these two files "crs.geo1"
res2 <- over(balt_dat,baltimore) #This function tells you to which community each CCTV belongs to
counts <- table(res2$community)
counts <- as.data.frame(counts)
colnames(counts)[1] <- "Community"
sum(counts$Freq) #We see that we have 836 observations in total, which is a good sign as our initial CCTV data set contained 836 observations
To make this workable, we need to create a new CCTV file in which we replace each NA location with 0. Lastly, we create a new column with the mutate function to calculate the CCTV density, i.e. the number of CCTVs per area divided by the total number of CCTVs.
CCTV_per_area <- area_data[2] %>%
left_join(counts,by="Community") #One must add the communities where there are no counts i.e no CCTV
CCTV_per_area[is.na(CCTV_per_area)] <- 0
CCTV_per_area <- mutate(CCTV_per_area, density_perc=(CCTV_per_area$Freq/(sum(CCTV_per_area$Freq)))*100)
We now want to map CCTV density on the Baltimore map. We first use the %in% operator to check that the communities in the Baltimore shapefile are the same as those in the CCTV per area data set. As this only returns TRUE values, the match works and we can proceed with the analysis.
library(tmap)
baltimore$community %in% CCTV_per_area$Community
Next, we perform a left_join between the Baltimore shapefile and the CCTV per area data set. To hedge against the different spellings (the column name is written once with a capital letter and once with a small letter), we specify the matching columns explicitly in the by argument. Finally, we create the map with the tmap package. The tmap package works somewhat like the ggplot2 package: you first define an element, always starting with the tm_shape argument, and then add as many layers as you wish with the plus operator. We use the Baltimore shapefile, fill it with the density percentage, define some breaks, set the borders and finally the layout.
baltimore@data <- left_join(baltimore@data, CCTV_per_area, by = c('community' = 'Community'))
CCTV_dens_map <- tm_shape(baltimore) + tm_fill(col = "density_perc", title ="CCTV density per Area in %", breaks=c(0,1,2,3,4,5,6,7,8,9,10,11)) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
tmap_mode("plot")
CCTV_dens_map
Next, we create CrimeStatsPerArea. To achieve this, we group the crime_data_with_areas data set by community and then use summarize, which enables us to compute the crime frequency for each area. Then, using the population data, we divide the crime frequency by the number of inhabitants in each area. We finally multiply this by 1000 to obtain the crime rate per 1000 inhabitants. Again, we add one more row because we have no values for the prison. To make sure we made no mistake, we sum the CrimeFrequency column to check that it equals 349482. This is the case, so we can proceed confidently.
CrimeStatsPerArea <- crime_data_with_areas %>%
group_by(Community) %>%
summarize(CrimeFrequency=n())
CrimeStatsPerArea <- mutate(CrimeStatsPerArea,CrimePer1000inhabitants=((CrimeStatsPerArea$CrimeFrequency/population_data$tpop20)*1000))
CrimeStatsPerArea <- rbind(CrimeStatsPerArea,list("Unassigned -- Jail",0,0)) #We have no information about crimes committed in jail, yet, the community statistical area encompass 56 area, including jail. In order to ensure consistency, we must add a 56th observation in this data frame.
sum(CrimeStatsPerArea$CrimeFrequency) #The total sum is 349482, which is what we expect
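Note that the per-capita division above matches CrimeStatsPerArea and population_data by row position, which assumes both data sets are sorted identically by community. As a defensive alternative, a merge keyed on the community name makes the alignment explicit; the sketch below uses a toy example with invented crime and population counts.

```r
# Toy stand-ins (invented values) for the crime-frequency and population tables.
crime_freq <- data.frame(Community = c("Canton", "Cherry Hill"),
                         CrimeFrequency = c(5000, 3000))
pop        <- data.frame(Community = c("Cherry Hill", "Canton"),
                         tpop20 = c(8000, 10000))

# merge() pairs the rows by community name, so differing row orders are harmless.
merged <- merge(crime_freq, pop, by = "Community")
merged$CrimePer1000inhabitants <- merged$CrimeFrequency / merged$tpop20 * 1000
merged
```

With the real data, the equivalent would be a left_join on Community before the mutate step.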
Community_data <- CrimeStatsPerArea[,-2] %>%
left_join(CCTV_per_area,by="Community") %>%
left_join(poverty_data[,c(2,7)],by=c("Community"="CSA2010"))
We now want to map crimes per capita per community. The methodology is the same as for the CCTV density. This time, we use the "quantile" method to create category breaks.
library(tmap)
baltimore$community %in% CrimeStatsPerArea$Community #We see that we have a perfect match
baltimore@data <- left_join(baltimore@data, CrimeStatsPerArea, by = c('community' = 'Community'))
Crime_per_capita_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
tmap_mode("plot")
Crime_per_capita_map
To visualise the distribution of crime per capita across Baltimore's communities, we decided to use a distorted map. Again, we use the tmap package, together with the cartogram_ncont function, which distorts the map based on the intensity of crime per capita in each community. Concretely, we want to show that crime per capita is higher in the city center than in the suburban areas. This comes out quite neatly in the graphic.
Distorted_Crime_map <- tm_shape(cartogram_ncont(baltimore, "CrimePer1000inhabitants"))+tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.07) #This map distorts the size of each area depending on its crime per capita. It is interesting as it shows that higher crime per capita tends to be concentrated in the city center.
tmap_mode("plot")
Distorted_Crime_map
The first thing we do here is compute the unique values of the Description column of the crime data set. We see that we have 14 types of crime. We want to observe crimes by type, so we create new classifications. The law distinguishes three basic classes of criminal offenses: infractions, misdemeanors, and felonies. Our data set contains no infractions, so the 14 types of crime are divided between the two remaining categories.
unique(crime_data_with_areas$Description)
#We see that we have 14 types of crime. We want to observe crimes by type, therefore we make new classifications. The law consists of three basic classifications of criminal offenses: infractions, misdemeanors, and felonies. In our data set, we have no infractions.
#Misdemeanor:LARCENY FROM AUTO,COMMON ASSAULT, ROBBERY - COMMERCIAL, LARCENY
#Felony: RAPE, ARSON, HOMICIDE, BURGLARY, AUTO THEFT, ROBBERY - CARJACKING, AGG. ASSAULT, ROBBERY - STREET, ROBBERY - RESIDENCE, SHOOTING
Next, we create a data set called crime_cat, which tells us which recorded crime type belongs to which crime category. This data set will later be used in a left join with crime_data_with_areas. We are then left with the crime data set enriched with a new column indicating whether the crime was a felony or a misdemeanor.
crime_cat <- data.frame(Category=c("Misdemeanor","Felony"), Description=c(c("LARCENY FROM AUTO,COMMON ASSAULT,ROBBERY - COMMERCIAL,LARCENY"),c("RAPE,ARSON,HOMICIDE,BURGLARY,AUTO THEFT,ROBBERY - CARJACKING,AGG. ASSAULT,ROBBERY - STREET,ROBBERY - RESIDENCE,SHOOTING")))
crime_cat <- separate_rows(crime_cat, Description, sep = ",")
crime_cat$Description %in% unique(crime_data_with_areas$Description) #Ensure we have a perfect match
crime_data_with_areas <- crime_data_with_areas %>%
left_join(crime_cat,by="Description") #We add a new variable to our crime data set
Next, we compute CrimePerCategoryPerArea. Here we use the piping operator again, this time grouping by community and category. Once more, we check that we indeed have 349482 observations. From that, we compute both felonies and misdemeanors per capita in each community and (again) add the prison row to the newly created data sets.
CrimePerCategoryPerArea <- crime_data_with_areas %>%
group_by(Community,Category) %>%
summarize(RepartitionPerCategoryPerArea=n())
sum(CrimePerCategoryPerArea$RepartitionPerCategoryPerArea) #Again, we check that we indeed have 349482 observations
CrimeCategoryRepartition <- CrimePerCategoryPerArea %>%
group_by(Category) %>%
summarise(Repartition=sum(RepartitionPerCategoryPerArea)) #We observe that in Baltimore, the number of felony is close to the number of misdemeanor
FelonyStats <- CrimePerCategoryPerArea %>% filter(Category=="Felony")
FelonyStats$FelonyPerCapitaPerArea <-((CrimePerCategoryPerArea%>% filter(Category=="Felony"))[[3]]/population_data$tpop20)*1000
FelonyStats[56,] <- list("Unassigned -- Jail","Felony",0,0)
MisdemeanorStats <- CrimePerCategoryPerArea %>% filter(Category=="Misdemeanor")
MisdemeanorStats$MisdemeanorPerCapitaPerArea <-((CrimePerCategoryPerArea%>% filter(Category=="Misdemeanor"))[[3]]/population_data$tpop20)*1000
MisdemeanorStats[56,] <- list("Unassigned -- Jail","Misdemeanor",0,0)
Community_data <- Community_data %>%
left_join(FelonyStats[,-c(2:3)],by="Community") %>%
left_join(MisdemeanorStats[,-c(2:3)],by="Community")
As mentioned earlier, it is also possible to divide the crimes committed in Baltimore by ‘type’ of crime. A distinction is generally made between property crime and violent crime. In a property crime, a victim’s property is stolen or destroyed without the use or threat of force against the victim; property crimes include burglary and theft as well as vandalism and arson. In a violent crime, a victim is harmed by or threatened with violence; violent crimes include rape and sexual assault, robbery, assault and murder.
In order to categorize the crimes contained in crime_data_with_areas as violent or property crimes, we will use a data set once again provided by the Baltimore open data portal. This data set provides information about the crime codes used by the police to categorize crimes. We first import the data set. Then, we check whether the codes in the two data sets are well and truly identical: three crime codes are written with an extra blank space afterward, which we correct. Then, using the left_join function, we add a new column to our crime_data_with_areas data frame. Finally, we create data frames for both violent and property crime, following the same methodology we used for felonies and misdemeanors.
crimecode_data <- read.csv(file = here::here("data/Balt_CRIME_CODES.csv"))
unique(crime_data_with_areas$CrimeCode) %in% unique(crimecode_data$CODE) #We identify spelling errors
crimecode_data$CODE[185] <- "8H"
crimecode_data$CODE[186] <- "8I"
crimecode_data$CODE[187] <- "8J"
crime_data_with_areas <- crime_data_with_areas %>%
left_join(crimecode_data[,c(1,8)],by=c("CrimeCode"="CODE"))
unique(crime_data_with_areas$VIO_PROP_CFS)
which(is.na(crime_data_with_areas$VIO_PROP_CFS)) #We ensure that we have no NAs
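Hard-coding the three row indices above works, but it is fragile if the source file ever changes. A more robust alternative, sketched here on synthetic codes rather than the project data, is to strip stray blanks from every code at once with trimws():

```r
# Sketch: remove leading/trailing blanks from crime codes in one pass.
# `codes` is synthetic; the real chunk would apply this to crimecode_data$CODE.
codes <- c("8H ", "8I ", "8J ", "3A")
clean <- trimws(codes)
all(clean %in% c("8H", "8I", "8J", "3A"))  # TRUE: every code now matches
```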
CrimePerCategory2PerArea <- crime_data_with_areas %>%
group_by(Community,VIO_PROP_CFS) %>%
summarize(RepartitionPerCategory2PerArea=n())
sum(CrimePerCategory2PerArea$RepartitionPerCategory2PerArea) #Again, we check that we indeed have 349482 observations
CrimeCategory2Repartition <- CrimePerCategory2PerArea %>%
group_by(VIO_PROP_CFS) %>%
summarise(Repartition=sum(RepartitionPerCategory2PerArea))
PropertyStats <- CrimePerCategory2PerArea %>% filter(VIO_PROP_CFS=="PROPERTY")
PropertyStats$PropertyCrimePerCapitaPerArea <-((CrimePerCategory2PerArea%>% filter(VIO_PROP_CFS=="PROPERTY"))[[3]]/population_data$tpop20)*1000
PropertyStats[56,] <- list("Unassigned -- Jail","PROPERTY",0,0)
ViolentStats <- CrimePerCategory2PerArea %>% filter(VIO_PROP_CFS=="VIOLENT")
ViolentStats$ViolentCrimePerCapitaPerArea <-((CrimePerCategory2PerArea%>% filter(VIO_PROP_CFS=="VIOLENT"))[[3]]/population_data$tpop20)*1000
ViolentStats[56,] <- list("Unassigned -- Jail","VIOLENT",0,0)
Community_data <- Community_data %>%
left_join(ViolentStats[,c(1,4)],by="Community") %>%
left_join(PropertyStats[,c(1,4)],by="Community")
After ensuring that we have a perfect match, we perform a left join for felony and misdemeanor data and map everything.
#Felony
baltimore$community %in% FelonyStats$Community
baltimore@data <- left_join(baltimore@data, FelonyStats, by = c('community' = 'Community'))
Felony_map <- tm_shape(baltimore) + tm_fill(col = "FelonyPerCapitaPerArea", title ="Felony per capita per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Felony_map
#Misdemeanor
baltimore$community %in% MisdemeanorStats$Community
baltimore@data <- left_join(baltimore@data, MisdemeanorStats, by = c('community' = 'Community'))
Misdemeanor_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorPerCapitaPerArea", title ="Misdemeanor per capita per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)
Misdemeanor_map
The idea is that we want to get information about how crime evolved over time. Here we could have used a loop, but we have not yet found a way to do it properly, so we created a data set for each year. The results are interesting: if we compare how many observations we have in each per-year data set, we see roughly 40,000 cases a year, except for 2020 (due to COVID) and 2021 (which is not finished). We do not create data sets for 2013 and earlier, because very few observations date from before 2014. The graph represents the monthly evolution of crime for each year. There seems to be a recurring pattern: each year, crime increases mid-year before decreasing in December.
Crime_in_2021 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2021-01-01") & CrimeDateTime <= as.Date("2021-12-31"))
Crime_in_2020 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2020-01-01") & CrimeDateTime <= as.Date("2020-12-31"))
Crime_in_2019 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2019-01-01") & CrimeDateTime <= as.Date("2019-12-31"))
Crime_in_2018 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2018-01-01") & CrimeDateTime <= as.Date("2018-12-31"))
Crime_in_2017 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2017-01-01") & CrimeDateTime <= as.Date("2017-12-31"))
Crime_in_2016 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2016-01-01") & CrimeDateTime <= as.Date("2016-12-31"))
Crime_in_2015 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2015-01-01") & CrimeDateTime <= as.Date("2015-12-31"))
Crime_in_2014 <- crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2014-01-01") & CrimeDateTime <= as.Date("2014-12-31"))
crime_data_with_areas %>% filter(CrimeDateTime < as.Date("2014-01-01")) #We see that we have very few (76) observations before 2014, thus we do not consider them
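The eight per-year filters above could be collapsed into one step. A sketch of the loop idea, using group_split() from dplyr on a tiny synthetic tibble (the real chunk would start from crime_data_with_areas instead of `toy`):

```r
library(dplyr)
library(lubridate)

# Synthetic stand-in for crime_data_with_areas
toy <- tibble(
  CrimeDateTime = as.Date(c("2014-03-01", "2014-07-15", "2020-01-02")),
  Community     = c("A", "B", "A")
)

# One tibble per calendar year, instead of one hand-written filter per year
crime_by_year <- toy %>%
  mutate(Year = year(CrimeDateTime)) %>%
  group_split(Year)

length(crime_by_year)  # 2: one tibble for 2014, one for 2020
```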
Crime_Monthly_evolution_map <- crime_data_with_areas %>%
count(month=floor_date(CrimeDateTime,"month")) %>%
ggplot(aes(month,n))+geom_line()+
scale_x_date(limits = c(as.Date("2014-01-01"), as.Date("2021-08-31"))) #This enables us to see how crime evolve, month after month
Crime_Monthly_evolution_map
Next, we calculate the crime per capita for each year with the piping operator, grouping by community and summarising the counts. Finally, we create the crime_evolution data set, which combines all the yearly data.
#_____ Calculations of the crime rates
CrimePerCapitaPerArea2021 <- Crime_in_2021 %>%
group_by(Community) %>%
summarize(CrimeFrequency21=n())
CrimePerCapitaPerArea2021 <- mutate(CrimePerCapitaPerArea2021,CrimePer1000inhabitants21=((CrimePerCapitaPerArea2021$CrimeFrequency21/population_data$tpop20)*1000))
CrimePerCapitaPerArea2021 <- rbind(CrimePerCapitaPerArea2021,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2020 <- Crime_in_2020 %>%
group_by(Community) %>%
summarize(CrimeFrequency20=n())
CrimePerCapitaPerArea2020 <- mutate(CrimePerCapitaPerArea2020,CrimePer1000inhabitants20=((CrimePerCapitaPerArea2020$CrimeFrequency20/population_data$tpop20)*1000))
CrimePerCapitaPerArea2020 <- rbind(CrimePerCapitaPerArea2020,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2019 <- Crime_in_2019 %>%
group_by(Community) %>%
summarize(CrimeFrequency19=n())
CrimePerCapitaPerArea2019 <- mutate(CrimePerCapitaPerArea2019,CrimePer1000inhabitants19=((CrimePerCapitaPerArea2019$CrimeFrequency19/population_data$tpop20)*1000))
CrimePerCapitaPerArea2019 <- rbind(CrimePerCapitaPerArea2019,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2018 <- Crime_in_2018 %>%
group_by(Community) %>%
summarize(CrimeFrequency18=n())
CrimePerCapitaPerArea2018 <- mutate(CrimePerCapitaPerArea2018,CrimePer1000inhabitants18=((CrimePerCapitaPerArea2018$CrimeFrequency18/population_data$tpop20)*1000))
CrimePerCapitaPerArea2018 <- rbind(CrimePerCapitaPerArea2018,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2017 <- Crime_in_2017 %>%
group_by(Community) %>%
summarize(CrimeFrequency17=n())
CrimePerCapitaPerArea2017 <- mutate(CrimePerCapitaPerArea2017,CrimePer1000inhabitants17=((CrimePerCapitaPerArea2017$CrimeFrequency17/population_data$tpop20)*1000))
CrimePerCapitaPerArea2017 <- rbind(CrimePerCapitaPerArea2017,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2016 <- Crime_in_2016 %>%
group_by(Community) %>%
summarize(CrimeFrequency16=n())
CrimePerCapitaPerArea2016 <- mutate(CrimePerCapitaPerArea2016,CrimePer1000inhabitants16=((CrimePerCapitaPerArea2016$CrimeFrequency16/population_data$tpop20)*1000))
CrimePerCapitaPerArea2016 <- rbind(CrimePerCapitaPerArea2016,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2015 <- Crime_in_2015 %>%
group_by(Community) %>%
summarize(CrimeFrequency15=n())
CrimePerCapitaPerArea2015 <- mutate(CrimePerCapitaPerArea2015,CrimePer1000inhabitants15=((CrimePerCapitaPerArea2015$CrimeFrequency15/population_data$tpop20)*1000))
CrimePerCapitaPerArea2015 <- rbind(CrimePerCapitaPerArea2015,list("Unassigned -- Jail",0,0))
CrimePerCapitaPerArea2014 <- Crime_in_2014 %>%
group_by(Community) %>%
summarize(CrimeFrequency14=n())
CrimePerCapitaPerArea2014 <- mutate(CrimePerCapitaPerArea2014,CrimePer1000inhabitants14=((CrimePerCapitaPerArea2014$CrimeFrequency14/population_data$tpop20)*1000))
CrimePerCapitaPerArea2014 <- rbind(CrimePerCapitaPerArea2014,list("Unassigned -- Jail",0,0))
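The eight nearly identical blocks above could also be factored into a helper. A sketch on synthetic data (`yearly_rate` and its arguments are hypothetical names, not part of the project code); note that joining on Community is safer than relying on row order, as the blocks above do:

```r
library(dplyr)

# Hypothetical helper: count crimes per community and convert to a
# per-1000-inhabitants rate, as the blocks above do for each year.
yearly_rate <- function(crimes, population) {
  crimes %>%
    group_by(Community) %>%
    summarize(Frequency = n(), .groups = "drop") %>%
    left_join(population, by = "Community") %>%
    mutate(Per1000 = Frequency / tpop20 * 1000)
}

# Synthetic stand-ins for a Crime_in_20xx data set and population_data
toy_crimes <- tibble(Community = c("A", "A", "B"))
toy_pop    <- tibble(Community = c("A", "B"), tpop20 = c(1000, 500))
yearly_rate(toy_crimes, toy_pop)$Per1000  # 2 and 2 per 1000 inhabitants
```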
crime_evolution <- CrimePerCapitaPerArea2021 %>%
left_join(CrimePerCapitaPerArea2020,by="Community") %>%
left_join(CrimePerCapitaPerArea2019,by="Community") %>%
left_join(CrimePerCapitaPerArea2018,by="Community") %>%
left_join(CrimePerCapitaPerArea2017,by="Community") %>%
left_join(CrimePerCapitaPerArea2016,by="Community") %>%
left_join(CrimePerCapitaPerArea2015,by="Community") %>%
left_join(CrimePerCapitaPerArea2014,by="Community")
Community_data <- Community_data %>%
left_join(crime_evolution,by="Community")
Another interesting way to visualise how crime evolved is with an animated map, created using the tmap_animation function. In order to use it, we have to build a very particular tibble. In the case at hand, we want our animated map to display the evolution of crime per capita over 7 years (from 2014 to 2020; we drop 2021 as the year is not complete). Therefore, we must have 7 x 56 observations: one crime per capita value for each year, for each of the 56 areas. The tibble is a bit peculiar, as each observation also needs a separate column holding a polygon (an S4 element) corresponding to the area in question. Since baltimore@polygons is an ordinary list, its S4 elements can be replicated with rep().
Once the tibble is built, we want to merge the data it contains into a SpatialPolygonsDataFrame; we use the baltimore SpatialPolygonsDataFrame for this. However, as the tibble contains 392 observations, this will enlarge our SpatialPolygonsDataFrame. As the baltimore object is also used for other purposes, we create an alias. Then, we merge the newly created tibble with this alias, simply using left_join. We also create a bbox object and an object called pb: the first delimits the geographical area of interest and the second defines custom class breaks. Finally, we create a map using the tm_shape function and animate it using tmap_animation.
anim_tibble <- tibble(Year=rep(2020:2014,56),Community=rep(Community_data$Community,each=7),CrimeRate=as.vector(t(crime_evolution[,-c(1,2,3,4,6,8,10,12,14,16)])),
geometry=rep(baltimore@polygons,each=7)) #baltimore@polygons is a plain list, so rep() replicates each of the 56 Polygons objects 7 times
baltimore_alias <- baltimore
baltimore_alias@polygons <- anim_tibble$geometry
baltimore_alias@data$community %in% anim_tibble$Community #Again, we ensure that we have a perfect match
baltimore_alias@data <-left_join(baltimore_alias@data,anim_tibble,by = c('community' = 'Community'))
bbox <- baltimore@bbox
pb <- c(0,25,50,75,100,125,150,175,200,225,250)
animated_crime_map <- tm_shape(baltimore_alias,bbox = bbox, projection = crs.geo1) +
tm_polygons("CrimeRate",breaks=pb) +
tm_facets(free.scales.fill = F,along = "Year")+tm_shape(baltimore)+tm_borders()
tmap_animation(animated_crime_map, delay=100)
First, we need to merge the data into one big file. Every data set needs a column that is named the same way (e.g. Community). We create a table with the following columns: Community Statistical Area, internet accessibility per CSA, CCTV per area, CrimeStatsPerArea, crime_data_with_areas, FelonyStats and MisdemeanorStats. Merging these files into one overview table enables us to run regressions and generate meaningful output from it. We simply merge the files by their shared column, namely the community statistical area.
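The merge described above can be sketched with Reduce() folding left_join over a list of tables that share the key column (synthetic tables here, not the project data):

```r
library(dplyr)

# Two synthetic per-community tables sharing the key column "Community"
cctv  <- tibble(Community = c("A", "B"), CCTVPerArea  = c(3, 1))
crime <- tibble(Community = c("A", "B"), CrimePer1000 = c(10, 4))

# Fold left_join over however many tables need merging
overview <- Reduce(function(x, y) left_join(x, y, by = "Community"),
                   list(cctv, crime))
ncol(overview)  # 3: the key plus one column from each table
```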
The idea here is to see whether there is any relationship between crime per capita and CCTV density. To do so, we first fit a simple linear regression model. We create a new data frame called CCTV_VS_crimes (which basically is a left join). The regression indicates a moderate positive relationship between crime per capita and CCTV density; the \(R^2\) is about 43%. Plotting the observations makes the tendency visible. The blue line represents the regression line.
#>
#> Call:
#> lm(formula = CCTV_VS_crimes$density_perc ~ CCTV_VS_crimes$CrimePer1000inhabitants)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.995 -1.043 -0.394 0.789 4.756
#>
#> Coefficients:
#> Estimate Std. Error t value
#> (Intercept) -1.162195 0.520924 -2.23
#> CCTV_VS_crimes$CrimePer1000inhabitants 0.004710 0.000739 6.37
#> Pr(>|t|)
#> (Intercept) 0.03 *
#> CCTV_VS_crimes$CrimePer1000inhabitants 4.3e-08 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.79 on 54 degrees of freedom
#> Multiple R-squared: 0.429, Adjusted R-squared: 0.419
#> F-statistic: 40.6 on 1 and 54 DF, p-value: 4.26e-08
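The scatter plot with the blue regression line mentioned above can be produced along these lines (a sketch on synthetic data; the real plot would use the CCTV_VS_crimes data frame):

```r
library(ggplot2)

# Synthetic stand-in for CCTV_VS_crimes, loosely mimicking the fitted model
set.seed(42)
toy <- data.frame(CrimePer1000inhabitants = runif(56, 0, 1500))
toy$density_perc <- -1.16 + 0.0047 * toy$CrimePer1000inhabitants +
  rnorm(56, sd = 1.8)

# Points plus the fitted OLS line (the "blue line" in the text)
ggplot(toy, aes(CrimePer1000inhabitants, density_perc)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
```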
In this section we map the CCTVs and the crimes. The method is the same as before, using the tmap package. However, this time we have two different shapes: tm_shape(baltimore), which constitutes the base map, and tm_shape(balt_dat), which adds a layer containing points. This map gives an intuition about the phenomenon we illustrated before: where crime per capita is lowest (for instance in the north of the city, or in some western areas), there seem to be fewer CCTVs. There appears to be a correlation between the dark red areas and the number of CCTVs per area.
Crime_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05)+ tm_shape(balt_dat) + tm_dots(col="black")
Felony_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "FelonyPerCapitaPerArea", title ="Felony per capita per Area in %", style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")
Misdemeanor_and_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "MisdemeanorPerCapitaPerArea", title ="Misdemeanor per capita per Area in %",style = "quantile") + tm_borders(col="black",alpha=0.3)+ tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")
tmap_mode("view") #Use this command to have interactive maps
baltimore@data[["fid"]]<-baltimore@data[["community"]] #We do that so that we see the name of the Community when using an interactive map
tmap_arrange(Crime_and_CCTV_map,Felony_and_CCTV_map,Misdemeanor_and_CCTV_map)
This map shows quite well that CCTV placement seems to follow the areas where crime per capita is highest. Looking at the north-western and south-western areas of the map, the placement of CCTVs aligns rather well with the areas considered dangerous.
Crime_per_capita_VS_CCTV_map <- tm_shape(baltimore) + tm_fill(col = "CrimePer1000inhabitants", title ="Crime per capita",style = "quantile") + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.05) + tm_shape(balt_dat) + tm_dots(col="black")
tmap_mode("plot")
Crime_per_capita_VS_CCTV_map
We are still not sure whether we should use the automatic break feature of tmap or whether we should set personalised map breaks. The following chunk illustrates how we could create personalised break arguments.
sort(baltimore@data[["CrimePer1000inhabitants"]])
breaks1 <- c(0,250,500,750,1000,1250,1500) #Not sure what break to use, for the moment I decided to use the automatic break system with the "quantile" parameter
tmap_mode("plot") #We go back to classic plotting
We are still trying to see whether the presence of CCTVs can deter crime. One interesting approach is to spatially locate crimes and compare them to CCTV locations. We know that CCTVs capture activities within 256 ft (~2 blocks). We only select crimes committed in August 2021 to keep the data interpretable (choosing a larger time frame would make the map unreadable). We choose August 2021 because it is the latest full month in our data set; taking the latest time points assures us that most of the CCTVs in the data set were already in place (since we have no information about when exactly each CCTV was added). Again, as before, we create a data table, assign coordinates and define the CRS (in this case "EPSG:4326", which we then transform using spTransform). Again, we create a map with tm_shape to visualise the results. The output shows where crime takes place relative to CCTV locations. Zooming in on the map, we see that some crimes are committed directly in front of CCTVs. Although this is not conclusive evidence, this observation goes against the idea that CCTVs are effective crime deterrents.
crime_spatial <- as.data.table(crime_data_with_areas %>% filter(CrimeDateTime >= as.Date("2021-08-01") & CrimeDateTime <= as.Date("2021-08-31")))
coordinates(crime_spatial) <- c("Longitude","Latitude")
proj4string(crime_spatial) <- CRS("+init=epsg:4326")
crime_spatial <- spTransform(crime_spatial,crs.geo1)
August21Crimes_VS_CCTV <- tm_shape(baltimore) + tm_borders(col="black",alpha=0.3) + tm_layout(inner.margins = 0.1, title="Crimes committed in August 2021 VS CCTV location",frame.lwd = 5)+ tm_shape(balt_dat) + tm_dots(col="black")+tm_shape(crime_spatial)+tm_dots(col="red",alpha=0.5)
#It could be interesting to see where crime took place relative to CCTV locations in the area with the highest crime rate in August 2021
tmap_mode("view") #Use this command to have interactive maps
August21Crimes_VS_CCTV
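The 256 ft capture radius mentioned above could also be checked numerically rather than visually. A hypothetical sketch counting crimes within 256 ft of the nearest camera, using plain Euclidean distance on projected coordinates in feet (toy coordinates, not the real data):

```r
# Toy projected coordinates (in feet): two cameras, two crimes
cams   <- matrix(c(0, 0,  1000, 1000), ncol = 2, byrow = TRUE)
crimes <- matrix(c(100, 100,  5000, 5000), ncol = 2, byrow = TRUE)

# For each crime, is the nearest camera within 256 ft?
near <- apply(crimes, 1, function(p) {
  min(sqrt(rowSums(t(t(cams) - p)^2))) <= 256
})
sum(near)  # 1 crime falls inside a camera's capture radius in this toy case
```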